Recent advances in self-supervised visual representation learning have paved the way for unsupervised methods tackling tasks such as object discovery and instance segmentation. However, discovering objects in an image with no supervision is a very hard task: which objects are desired, when should they be separated into parts, how many are there, and of what classes? The answers to these questions depend on the tasks and datasets of evaluation. In this work, we take a different approach and propose to look for the background instead. This way, the salient objects emerge as a by-product without any strong assumption on what an object should be. We propose FOUND, a simple model made of a single $1\times1$ convolution initialized with coarse background masks extracted from self-supervised patch-based representations. After fast training and refinement of these seed masks, the model reaches state-of-the-art results on unsupervised saliency detection and object discovery benchmarks. Moreover, we show that our approach yields good results on the unsupervised semantic segmentation retrieval task. The code to reproduce our results is available at https://github.com/valeoai/FOUND.
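The core of FOUND is compact enough to sketch. The snippet below is a minimal illustration (not the released implementation): a single $1\times1$ convolution applied to frozen self-supervised patch features predicts a per-patch background score and is fitted to coarse seed masks; all tensor shapes and names are assumptions made for illustration.

```python
# Minimal sketch (not the official FOUND code): a single 1x1 convolution
# over frozen self-supervised patch features predicts a background score
# per patch, supervised by coarse seed masks. Shapes and names are
# illustrative assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

class BackgroundHead(nn.Module):
    def __init__(self, feat_dim: int):
        super().__init__()
        # The whole trainable model: one 1x1 convolution.
        self.head = nn.Conv2d(feat_dim, 1, kernel_size=1)

    def forward(self, patch_feats):   # (B, C, H, W) frozen backbone features
        return self.head(patch_feats)  # (B, 1, H, W) background logits

def training_step(head, patch_feats, seed_bg_mask, optimizer):
    """One step of fitting the head to coarse background seed masks (float, (B,1,H,W))."""
    logits = head(patch_feats)
    loss = F.binary_cross_entropy_with_logits(logits, seed_bg_mask)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```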
We propose a new self-supervised method for pre-training the backbone of deep perception models operating on point clouds. The core idea is to train the model on a pretext task which is the reconstruction of the surface on which the 3D points are sampled, and to use the underlying latent vectors as input to the perception head. The intuition is that if the network is able to reconstruct the scene surface, given only sparse input points, then it probably also captures some fragments of semantic information that can be used to boost an actual perception task. This principle has a very simple formulation, which makes it both easy to implement and widely applicable to a large range of 3D sensors and deep networks performing semantic segmentation or object detection. In fact, it supports a single-stream pipeline, as opposed to most contrastive learning approaches, allowing training on limited resources. We conducted extensive experiments on various autonomous driving datasets, involving very different kinds of lidars, for both semantic segmentation and object detection. The results show the effectiveness of our method in learning useful representations without any annotation, compared to existing approaches. Code is available at \href{https://github.com/valeoai/ALSO}{github.com/valeoai/ALSO}
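A minimal sketch of such a surface-reconstruction pretext task, as we read it from the description above (not the official ALSO code): query points around the lidar returns are labeled as occupied or empty, and a small decoder predicts occupancy from the backbone's per-point latent vectors; dimensions and names are illustrative.

```python
# Illustrative sketch of a surface-reconstruction pretext task for point
# cloud backbones (inspired by the idea described above, not the official
# ALSO implementation). Query points sampled along sensor rays in front of
# a lidar return can be treated as empty space, points at the return as
# occupied; a small decoder predicts occupancy from the backbone features.
import torch
import torch.nn as nn
import torch.nn.functional as F

class OccupancyDecoder(nn.Module):
    def __init__(self, feat_dim: int, hidden: int = 64):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(feat_dim + 3, hidden), nn.ReLU(),
            nn.Linear(hidden, 1),
        )

    def forward(self, point_feat, query_offset):
        # point_feat: (N, C) latent vector of the closest input point
        # query_offset: (N, 3) query position relative to that point
        return self.mlp(torch.cat([point_feat, query_offset], dim=-1))

def pretext_loss(decoder, point_feat, query_offset, is_occupied):
    """Binary occupancy loss used as the self-supervised pretext objective."""
    logits = decoder(point_feat, query_offset).squeeze(-1)
    return F.binary_cross_entropy_with_logits(logits, is_occupied.float())
```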
The predictive performance of machine learning models trained with empirical risk minimization (ERM) can degrade considerably under distribution shifts. The presence of spurious correlations in the training dataset causes ERM-trained models to exhibit high loss when evaluated on minority groups in which such correlations do not hold. Extensive attempts have been made to develop methods that improve worst-group robustness. However, they require group information for every training input, or at least a validation set with group labels to tune their hyperparameters, which may be costly to obtain or simply unknown. In this paper, we address the challenge of improving group robustness without any group annotation during training or validation. To this end, we propose to partition the training dataset into groups based on Gram matrices of features extracted by an "identification" model, and to apply robust optimization based on these pseudo-groups. In the realistic scenario where group labels are unavailable, our experiments show that our approach not only improves robustness over ERM but also outperforms all recent baselines.
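A hedged sketch of this two-stage recipe (not the authors' code): pseudo-groups are obtained by clustering Gram-matrix statistics of features from an identification model, and the final model is then trained with a worst-group objective over those pseudo-groups; all names and hyperparameters are illustrative.

```python
# Hedged sketch of the two-stage recipe described above (not the authors'
# implementation): (1) cluster training examples into pseudo-groups using
# Gram-matrix statistics of features from an "identification" model,
# (2) train the final model with a worst-group (group-DRO style) objective
# over those pseudo-groups. All names are illustrative.
import numpy as np
import torch
from sklearn.cluster import KMeans

def pseudo_groups_from_gram(features, n_groups=4):
    """features: (N, C, H*W) spatial features from the identification model."""
    grams = np.einsum("ncl,ndl->ncd", features, features)  # (N, C, C) Gram matrices
    flat = grams.reshape(len(features), -1)
    return KMeans(n_clusters=n_groups, n_init=10).fit_predict(flat)  # (N,) group ids

def worst_group_loss(per_sample_loss, group_ids, n_groups):
    """Robust objective: minimize the loss of the worst pseudo-group."""
    group_losses = []
    for g in range(n_groups):
        mask = torch.as_tensor(group_ids == g)
        if mask.any():
            group_losses.append(per_sample_loss[mask].mean())
    return torch.stack(group_losses).max()
```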
There has been a growing interest in implicit shape representations recently. In contrast to explicit representations, they have no resolution limitation and easily handle a wide variety of surface topologies. To learn these implicit representations, current approaches rely on a certain level of shape supervision (e.g., inside/outside information or distance-to-shape knowledge), or at least require a dense point cloud (to approximate well the distance to the shape). In contrast, we introduce {\method}, a self-supervised method for learning shape representations from possibly extremely sparse point clouds. As in Buffon's needle problem, we "drop" (sample) needles on the point cloud and consider that, statistically, close to the surface, the needle endpoints lie on opposite sides of the surface. No shape knowledge is required and the point cloud can be highly sparse, e.g., a lidar point cloud acquired by a vehicle. Previous self-supervised shape representation approaches fail to produce good results on this kind of data. We obtain quantitative results on par with existing supervised approaches on shape reconstruction datasets, and show promising qualitative results on hard autonomous driving datasets such as KITTI.
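A minimal sketch of the needle-dropping signal as we understand it from the abstract (not the paper's implementation): each needle is a short random segment centered on a point of the cloud, and an implicit occupancy network is encouraged to give complementary values at its two endpoints.

```python
# Illustrative sketch of the needle-dropping supervision signal (our reading
# of the abstract, not the paper's implementation). A needle is a short
# segment centered on a point of the cloud; its two endpoints are assumed to
# fall on opposite sides of the surface, so an implicit occupancy network
# f(x) in [0, 1] is pushed to give complementary values at the endpoints.
import torch

def drop_needles(points, needle_length=0.05):
    """Sample one needle per point: random direction, fixed length."""
    directions = torch.randn_like(points)
    directions = directions / directions.norm(dim=-1, keepdim=True)
    half = 0.5 * needle_length * directions
    return points - half, points + half  # endpoints a, b

def needle_loss(occupancy_net, a, b):
    """Endpoints should lie on opposite sides: occupancies should sum to 1."""
    occ_a = torch.sigmoid(occupancy_net(a)).squeeze(-1)
    occ_b = torch.sigmoid(occupancy_net(b)).squeeze(-1)
    return ((occ_a + occ_b - 1.0) ** 2).mean()
```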
We introduce FacialFilmroll, a solution for spatially and temporally consistent editing of faces in one or multiple shots. We build upon unwrap mosaics [Rav-Acha et al. 2008] by specializing them to faces. We leverage recent techniques for fitting a 3D face model on monocular videos to (i) improve the quality of the mosaic for edition and (ii) allow the automatic transfer of edits from one shot to other shots of the same actor. We explain how FacialFilmroll is integrated in post-production facilities. Finally, we present video editing results obtained with FacialFilmroll on high-resolution videos.
While zero-shot learning (ZSL) has been studied extensively for 2D images, its application to 3D data remains recent and scarce, with only a few methods, limited to classification. We introduce the first generative approach for ZSL and generalized ZSL (GZSL) on 3D data, which can handle classification and, for the first time, semantic segmentation. We show that it reaches or outperforms the state of the art on ModelNet40 classification for both inductive ZSL and inductive GZSL. For semantic segmentation, we create three benchmarks for evaluating this new ZSL task, using S3DIS, ScanNet and SemanticKITTI. Our experiments show that our method outperforms strong baselines, which we additionally propose for this task.
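For context, a generic sketch of the generative ZSL recipe evoked above (not the paper's exact architecture): a generator conditioned on class word embeddings synthesizes features for unseen classes, which can then be mixed with real seen-class features to train an ordinary classifier; all dimensions are assumptions.

```python
# Generic sketch of generative zero-shot learning (not the paper's exact
# model): a generator conditioned on a class word embedding produces
# synthetic features for unseen classes, later used to train a classifier
# alongside real seen-class features. Dimensions and names are assumptions.
import torch
import torch.nn as nn

class ConditionalFeatureGenerator(nn.Module):
    def __init__(self, attr_dim=300, noise_dim=64, feat_dim=256):
        super().__init__()
        self.noise_dim = noise_dim
        self.net = nn.Sequential(
            nn.Linear(attr_dim + noise_dim, 512), nn.LeakyReLU(0.2),
            nn.Linear(512, feat_dim),
        )

    def forward(self, class_embeddings):  # (B, attr_dim), e.g. word vectors
        noise = torch.randn(class_embeddings.size(0), self.noise_dim,
                            device=class_embeddings.device)
        return self.net(torch.cat([class_embeddings, noise], dim=-1))

# Usage sketch: synthesize features for one unseen class (placeholder embedding).
gen = ConditionalFeatureGenerator()
unseen_embedding = torch.randn(1, 300).repeat(100, 1)  # placeholder word vectors
fake_unseen_features = gen(unseen_embedding)            # (100, 256)
```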
View-dependent effects such as reflections pose a substantial challenge for image-based and neural rendering algorithms. Above all, curved reflectors are particularly hard, as they lead to highly non-linear reflection flows as the camera moves. We introduce a new point-based representation to compute Neural Point Catacaustics allowing novel-view synthesis of scenes with curved reflectors, from a set of casually-captured input photos. At the core of our method is a neural warp field that models catacaustic trajectories of reflections, so complex specular effects can be rendered using efficient point splatting in conjunction with a neural renderer. One of our key contributions is the explicit representation of reflections with a reflection point cloud which is displaced by the neural warp field, and a primary point cloud which is optimized to represent the rest of the scene. After a short manual annotation step, our approach allows interactive high-quality renderings of novel views with accurate reflection flow. Additionally, the explicit representation of reflection flow supports several forms of scene manipulation in captured scenes, such as reflection editing, cloning of specular objects, reflection tracking across views, and comfortable stereo viewing. We provide the source code and other supplemental material on https://repo-sam.inria.fr/fungraph/neural_catacaustics/
Analogical proportions compare pairs of items (a, b) and (c, d) in terms of their differences and similarities. They play a key role in the formalization of analogical inference. The paper first discusses how to improve analogical inference in terms of accuracy and in terms of computational cost. Then it indicates the potential of analogical proportions for explanation. Finally, it highlights the close relationship between analogical proportions and multi-valued dependencies, which reveals an unsuspected aspect of the former.
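For readers unfamiliar with the notion, one standard Boolean formalization of an analogical proportion, not spelled out in the abstract itself, reads as follows: $a:b::c:d$ holds when $a$ differs from $b$ exactly as $c$ differs from $d$.

```latex
% One common Boolean reading of "a is to b as c is to d" (recalled for
% context; the abstract does not commit to a specific formalization).
(a : b :: c : d) \;\equiv\;
  \big((a \wedge \neg b) \leftrightarrow (c \wedge \neg d)\big)
  \wedge
  \big((\neg a \wedge b) \leftrightarrow (\neg c \wedge d)\big)
```

For instance, $(1,0,1,0)$ and $(0,1,0,1)$ satisfy this proportion, while $(1,0,0,1)$ does not.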
Deep learning has emerged as an effective solution for solving the task of object detection in images, but at the cost of requiring large labeled datasets. To mitigate this cost, semi-supervised object detection methods, which leverage abundant unlabeled data, have been proposed and have already shown impressive results. However, most of these methods require linking a pseudo-label to a ground-truth object by thresholding. In previous works, this threshold value is usually determined empirically, which is time-consuming and only done for a single data distribution. When the domain, and thus the data distribution, changes, a new and costly parameter search is necessary. In this work, we introduce Adaptive Self-Training for Object Detection (ASTOD), a simple yet effective teacher-student method. ASTOD determines, without cost, a threshold value based directly on the ground value of the score histogram. To improve the quality of the teacher predictions, we also propose a novel pseudo-labeling procedure: we use different views of the unlabeled images during the pseudo-labeling step to reduce the number of missed predictions and thus obtain better candidate labels. Our teacher and our student are trained separately, and our method can be used in an iterative fashion by replacing the teacher with the student. On the MS-COCO dataset, our method consistently performs favorably against state-of-the-art methods that do not require a threshold parameter, and shows competitive results with methods that require a parameter sweep search. Additional experiments against a supervised baseline on the DIOR dataset of satellite images lead to similar conclusions and prove that the score threshold can be adapted automatically in self-training, regardless of the data distribution.
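The following sketch shows one generic way to derive a pseudo-label threshold from a score histogram, as an illustration of the idea rather than ASTOD's exact rule: scores of detections on unlabeled images are histogrammed and the lowest bin between the two modes is taken as the cut-off; the bin count and search range are assumptions.

```python
# Illustrative only: one way to pick a pseudo-label threshold automatically
# from the histogram of detection scores (a generic reading of the idea
# above, not necessarily ASTOD's exact rule).
import numpy as np

def threshold_from_histogram(scores, bins=50, lo=0.1, hi=0.9):
    """Return the score at the lowest histogram bin inside [lo, hi]."""
    counts, edges = np.histogram(scores, bins=bins, range=(0.0, 1.0))
    centers = 0.5 * (edges[:-1] + edges[1:])
    searchable = (centers >= lo) & (centers <= hi)   # ignore extreme bins
    valley = np.argmin(np.where(searchable, counts, counts.max() + 1))
    return centers[valley]

# Example: a bimodal score distribution yields a threshold near the dip.
scores = np.concatenate([np.random.beta(2, 8, 5000),   # background-like scores
                         np.random.beta(8, 2, 2000)])  # confident detections
print(threshold_from_histogram(scores))
```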
A paper by Alsinglawi et al. was recently accepted and published in Scientific Reports. In this paper, the authors aim to predict the length of stay (LOS), discretized into either long (> 7 days) or short (< 7 days) stays, of lung cancer patients in an ICU department using various machine learning techniques. The authors claim to achieve perfect results, with an Area Under the Receiver Operating Characteristic curve (AUROC) of 100%, using a Random Forest (RF) classifier with the ADASYN class-balancing oversampling technique; if accurate, this could have significant implications for hospital management. However, we have identified several methodological flaws in the manuscript that cause the results to be overly optimistic and would have serious consequences if used in clinical practice. Moreover, the reporting of the methodology is unclear and many important details are missing from the manuscript, which makes reproduction extremely difficult. We highlight the effect these oversights have had on the result and provide a more believable result of 88.91% AUROC when these oversights are corrected.
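As a general illustration of the kind of oversight that can inflate AUROC in such studies (without claiming it is the specific flaw of the criticized paper), the sketch below shows the leakage-free order of operations: class rebalancing such as ADASYN must be fitted on the training split only, after the train/test split; the data, model, and parameters are placeholders.

```python
# General illustration (not a claim about the specific flaws discussed
# above): oversampling such as ADASYN must be fitted on the training split
# only; applying it before the train/test split leaks synthetic copies of
# test-like samples into training and inflates AUROC.
import numpy as np
from imblearn.over_sampling import ADASYN
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

# Placeholder data: 20 features, ~15% positive (long-stay) class.
X, y = np.random.randn(1000, 20), (np.random.rand(1000) < 0.15).astype(int)

# Correct order: split first, oversample the training portion only.
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)
X_bal, y_bal = ADASYN(random_state=0).fit_resample(X_tr, y_tr)
clf = RandomForestClassifier(random_state=0).fit(X_bal, y_bal)
print(roc_auc_score(y_te, clf.predict_proba(X_te)[:, 1]))
```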